Productionize the dataset we are using for BackendBench #93

Open
wants to merge 37 commits into base: main

Conversation

PaliC
Contributor

@PaliC PaliC commented Aug 19, 2025

Due to a bug in GitHub I couldn't update this PR; the review history can be found at #57

This looks like a much bigger PR than it actually is. This is most of the work we need to do on the repo end for #44.

This PR

  1. Adds a dataloaders folder to support loading data from parquet files, Hugging Face URLs, and trace files (BackendBench/data_loaders.py)
  2. Adds a script that lets one convert back and forth between parquet and trace files (BackendBench/scripts/parquet_trace_converter.py)
  3. Defines a schema for what the final dataset ought to look like
  4. Adds a few filters to remove bad inputs (in this case, ops we likely don't want to benchmark because they are fill or view ops). This should scale to additional filters, such as outputs being close to zero or runtimes that are too short.
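To make point 1 concrete, here is a minimal sketch of the kind of source dispatch a loader like BackendBench/data_loaders.py could perform. The function name, the URL check, and the trace-line handling are illustrative assumptions, not the repo's actual API:

```python
# Hypothetical loader dispatch: parquet file, Hugging Face URL, or trace file.
# All names here are illustrative assumptions.

def load_records(source: str):
    """Load op records from a parquet file, a Hugging Face URL, or a trace file."""
    if source.endswith(".parquet") or source.startswith("https://huggingface.co"):
        import pandas as pd  # lazy import: only the parquet path needs it
        return pd.read_parquet(source).to_dict(orient="records")
    # Otherwise treat the source as a local plain-text trace file,
    # one record per non-empty line.
    with open(source) as f:
        return [{"line": line.rstrip("\n")} for line in f if line.strip()]
```

The lazy pandas import keeps the trace path free of heavyweight dependencies.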

I think 3 and 4 definitely require the most review.
The schema is described in the comment at the top of BackendBench/scripts/parquet_trace_converter.py

I'd also take a close look at the filters in BackendBench/scripts/dataset_filters.py, as the file flags a number of ops that don't seem useful in a benchmark, but I'd like a second look.
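In the spirit of those filters, here is a hedged sketch of a name-based op filter; the substring list and function name are hypothetical, not the actual contents of dataset_filters.py:

```python
# Illustrative op filter: drop ops whose name suggests a fill or view
# operation, which are unlikely to be worth benchmarking.
# The substrings and function name are assumptions, not the repo's code.

SKIP_SUBSTRINGS = ("fill", "view")

def filter_ops(op_names):
    """Keep only ops whose name contains none of the skip substrings."""
    return [op for op in op_names
            if not any(s in op for s in SKIP_SUBSTRINGS)]

ops = ["aten.add.Tensor", "aten.fill_.Scalar", "aten.view.default"]
print(filter_ops(ops))  # -> ['aten.add.Tensor']
```

New filters (e.g. near-zero outputs, too-short runtimes) would slot in as additional predicates alongside this one.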

BackendBench/scripts/parquet_trace_converter.py offers a trace-to-parquet mode and a parquet-to-trace mode. The parquet-to-trace mode is self-explanatory. The trace-to-parquet mode actually creates two parquet files: the first is a "dev" parquet containing extra metadata on the inputs, while the final parquet (which I refer to as "prod") is the result of all the filtering and is the one that should be used in benchmarks.
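The dev/prod split in trace-to-parquet mode can be sketched as follows; the function name and the sample filter are assumptions for illustration, not the converter's actual implementation:

```python
# Minimal sketch of the two-table split: "dev" keeps every record with its
# metadata, "prod" is what survives all filters. Names are assumptions.

def build_tables(records, filters):
    dev = list(records)  # dev parquet: everything, unfiltered
    prod = [r for r in dev if all(keep(r) for keep in filters)]
    return dev, prod

# Example: drop view-like ops from the prod table.
not_view = lambda r: "view" not in r["op"]
dev, prod = build_tables(
    [{"op": "aten.add.Tensor"}, {"op": "aten.view.default"}],
    [not_view],
)
# dev keeps both records; prod keeps only aten.add.Tensor
```

Keeping the unfiltered dev table around makes it cheap to re-run or tune filters without regenerating the parquet from the trace.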

You can find explanations of the trace files (these can be removed, as the trace format is not meant to be permanent) and the argument schema at https://huggingface.co/datasets/GPUMODE/huggingface_op_trace (I will add the parquet schema once it is finalized).

The result of creating and uploading a parquet to Hugging Face: https://huggingface.co/datasets/GPUMODE/huggingface_op_trace

As validation, here is a diff of the round-trip conversion of the tritonbench data (trace -> parquet (dev) -> trace):
https://www.diffchecker.com/YYiJ43cq/. The only differences come from renaming the op "aten.sum.SymInt" to "aten.sum.dim_IntList".
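A round-trip check like the one above could be automated along these lines; the rename table reflects the one renaming mentioned here, but the function names are illustrative assumptions:

```python
# Sketch of a round-trip validation: after normalizing the one renamed op,
# trace -> parquet -> trace should reproduce the original op list exactly.
# Function names are assumptions, not the repo's API.

RENAMES = {"aten.sum.SymInt": "aten.sum.dim_IntList"}

def normalize(op: str) -> str:
    """Map a legacy op name to its canonical name, if it has one."""
    return RENAMES.get(op, op)

def roundtrip_equal(original_ops, roundtripped_ops) -> bool:
    """True if the round-tripped ops match the (normalized) originals."""
    return [normalize(op) for op in original_ops] == list(roundtripped_ops)
```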

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Meta Open Source bot. label Aug 19, 2025